A Detailed Study on LLM Biases Concerning Corporate Social Responsibility and Green Supply Chains
Ontrup, Greta, Bush, Annika, Pauly, Markus, Aksoy, Meltem
Organizations increasingly use Large Language Models (LLMs) to improve supply chain processes and reduce environmental impacts. However, LLMs have been shown to reproduce biases regarding the prioritization of sustainable business strategies. It is therefore important to identify the underlying training-data biases that LLMs carry regarding the importance and role of sustainable business and supply chain practices. This study investigates how different LLMs respond to validated surveys about the role of ethics and responsibility in business, and about the importance of sustainable practices and relations with suppliers and customers. Using standardized questionnaires, we systematically analyze responses generated by state-of-the-art LLMs to identify variations. We further evaluate whether these differences are amplified by four organizational culture types, thereby assessing the practical relevance of the identified biases. The findings reveal significant systematic differences between models and demonstrate that organizational culture prompts substantially modify LLM responses. The study holds important implications for LLM-assisted decision-making in sustainability contexts.
- North America > United States > New York > New York County > New York City (0.04)
- Asia > China (0.04)
- South America > Colombia > Meta Department > Villavicencio (0.04)
- (10 more...)
- Research Report > New Finding (1.00)
- Public Relations > Community Relations (1.00)
- Questionnaire & Opinion Survey (0.89)
- Social Sector (1.00)
- Law > Environmental Law (0.34)
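The survey-administration setup the abstract describes can be sketched as a loop over models and organizational-culture prompts. This is a hypothetical illustration, not the authors' code: it assumes the four culture types are clan, adhocracy, market, and hierarchy; the survey item is invented; and `query_model` is a stand-in for a real LLM API call.

```python
# Sketch: administer a Likert-scale survey item to several LLMs under
# different organizational-culture system prompts (assumed culture types).

CULTURE_PROMPTS = {
    "clan": "You advise a collaborative, family-like organization.",
    "adhocracy": "You advise a dynamic, entrepreneurial organization.",
    "market": "You advise a results-driven, competitive organization.",
    "hierarchy": "You advise a structured, process-driven organization.",
}

SURVEY_ITEM = (
    "On a scale from 1 (strongly disagree) to 5 (strongly agree): "
    "'Being ethical and socially responsible is the most important thing a firm can do.'"
)

def query_model(model: str, system_prompt: str, item: str) -> int:
    """Placeholder for a chat-completion API call; returns a dummy 1-5 rating."""
    return (hash((model, system_prompt, item)) % 5) + 1

def run_survey(models):
    """Collect one rating per (model, culture) cell for later comparison."""
    results = {}
    for model in models:
        for culture, prompt in CULTURE_PROMPTS.items():
            results[(model, culture)] = query_model(model, prompt, SURVEY_ITEM)
    return results

responses = run_survey(["model-a", "model-b"])
```

Comparing the per-culture response distributions across models is then a standard between-groups analysis on `responses`.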
Dataset Creation and Baseline Models for Sexism Detection in Hausa
Muhammad, Fatima Adam, Hassan, Shamsuddeen Muhammad, Inuwa-Dutse, Isa
Sexism reinforces gender inequality and social exclusion by perpetuating stereotypes, bias, and discriminatory norms. Given how online platforms enable various forms of sexism to thrive, there is a growing need for effective sexism detection and mitigation strategies. While computational approaches to sexism detection are widespread in high-resource languages, progress remains limited in low-resource languages, where scarce linguistic resources and cultural differences affect how sexism is expressed and perceived. This study introduces the first Hausa sexism detection dataset, developed through community engagement, qualitative coding, and data augmentation. To capture cultural nuances and ensure linguistic representation, we conducted a two-stage user study (n=66) involving native speakers to explore how sexism is defined and articulated in everyday discourse. We further experiment with both traditional machine learning classifiers and pre-trained multilingual language models, and evaluate the effectiveness of few-shot learning in detecting sexism in Hausa. Our findings highlight challenges in capturing cultural nuance, particularly with clarification-seeking and idiomatic expressions, and reveal a tendency toward false positives in such cases.
- Africa > Nigeria > Jigawa State > Dutse (0.05)
- North America > United States > Virginia (0.04)
- Europe > United Kingdom > England > West Yorkshire > Huddersfield (0.04)
- Europe > France > Provence-Alpes-Côte d'Azur > Bouches-du-Rhône > Marseille (0.04)
- Research Report > New Finding (0.49)
- Public Relations > Community Relations (0.34)
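A traditional-ML baseline of the kind the abstract mentions can be sketched as a bag-of-words Naive Bayes classifier. This is an illustrative toy, not the paper's models, and the Hausa examples are invented placeholders rather than items from the dataset.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (token_list, label). Returns (log priors, per-class word log-probs)."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    priors = {c: math.log(n / len(docs)) for c, n in class_counts.items()}
    loglik = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        # Laplace smoothing over the shared vocabulary
        loglik[c] = {w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
                     for w in vocab}
    return priors, loglik

def predict(priors, loglik, tokens):
    # Out-of-vocabulary tokens contribute 0 to every class (ignored symmetrically)
    scores = {c: priors[c] + sum(loglik[c].get(t, 0.0) for t in tokens)
              for c in priors}
    return max(scores, key=scores.get)

train = [
    (["mata", "ba", "su", "iya"], "sexist"),          # invented placeholder
    (["mata", "su", "ne", "jagora"], "not_sexist"),   # invented placeholder
]
priors, loglik = train_nb(train)
label = predict(priors, loglik, ["mata", "ba", "su", "iya"])
```

A real baseline would tokenize actual annotated posts and evaluate with held-out data; the structure of train/predict stays the same.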
Language of Persuasion and Misrepresentation in Business Communication: A Textual Detection Approach
Hossen, Sayem, Joti, Monalisa Moon, Rashed, Md. Golam
The digitisation of business communication has reshaped persuasive discourse, enabling not only greater transparency but also more sophisticated deception. This inquiry synthesises classical rhetoric and communication psychology with linguistic theory and empirical studies of financial reporting, sustainability discourse, and digital marketing to explain how deceptive language can be systematically detected through its persuasive lexicon. In controlled settings, detection accuracies above 99% were achieved using computational textual analysis and personalised transformer models. Reproducing this performance in multilingual settings remains difficult, however, largely because sufficient data are hard to obtain and few multilingual text-processing infrastructures are in place. This evidence points to a widening gap between theoretical representations of communication and their empirical approximations, and hence to the need for robust automatic text-identification systems as AI-based discourse becomes increasingly realistic in communicating with humans.
- Asia > Bangladesh > Dhaka Division > Dhaka District > Dhaka (0.04)
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- Europe > Finland > Uusimaa > Helsinki (0.04)
- Asia > Japan > Honshū > Kansai > Osaka Prefecture > Osaka (0.04)
- Overview (0.88)
- Research Report > New Finding (0.67)
- Public Relations > Community Relations (0.46)
- Research Report > Experimental Study (0.46)
- Information Technology (1.00)
- Banking & Finance (1.00)
- Law (0.68)
- Media > News (0.46)
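The lexicon-based detection idea in the abstract can be illustrated in a few lines: score a text by the fraction of its tokens drawn from a persuasive lexicon and flag texts above a threshold. The lexicon entries and the threshold here are invented for illustration; the study's actual feature sets and models are far richer.

```python
import re

# Invented mini-lexicon of persuasion markers (illustrative only)
PERSUASIVE_LEXICON = {
    "guaranteed", "proven", "revolutionary", "risk-free",
    "unprecedented", "exclusive", "limitless",
}

def persuasion_score(text: str) -> float:
    """Fraction of tokens that appear in the persuasive lexicon."""
    tokens = re.findall(r"[a-z\-]+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in PERSUASIVE_LEXICON)
    return hits / len(tokens)

def flag_deceptive(text: str, threshold: float = 0.15) -> bool:
    """Assumed threshold; a real system would tune it on labeled data."""
    return persuasion_score(text) >= threshold

sales_copy = "Guaranteed risk-free returns with our proven revolutionary fund"
plain_copy = "The fund returned four percent last year"
```

In practice such lexicon scores would serve as features for a classifier rather than as a decision rule on their own.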
CF-RAG: A Dataset and Method for Carbon Footprint QA Using Retrieval-Augmented Generation
Zhao, Kaiwen, Balaji, Bharathan, Lee, Stephen
Product sustainability reports provide valuable insights into the environmental impacts of a product and are often distributed in PDF format. These reports often include a combination of tables and text, which complicates their analysis. The lack of standardization and the variability in reporting formats further exacerbate the difficulty of extracting and interpreting relevant information from large volumes of documents. In this paper, we tackle the challenge of answering questions related to carbon footprints within sustainability reports available in PDF format. Unlike previous approaches, our focus is on addressing the difficulties posed by the unstructured and inconsistent nature of text extracted from PDF parsing. To facilitate this analysis, we introduce CarbonPDF-QA, an open-source dataset containing question-answer pairs for 1,735 product report documents, along with human-annotated answers. Our analysis shows that GPT-4o struggles to answer questions with data inconsistencies. To address this limitation, we propose CarbonPDF, an LLM-based technique specifically designed to answer carbon footprint questions on such datasets. We develop CarbonPDF by fine-tuning Llama 3 with our training data. Our results show that our technique outperforms current state-of-the-art techniques, including question-answering (QA) systems fine-tuned on table and text data.
- North America > United States > District of Columbia > Washington (0.05)
- South America > Colombia > Antioquia Department > Medellín (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Public Relations > Community Relations (0.55)
- Research Report > New Finding (0.54)
- Social Sector (0.55)
- Law (0.34)
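The retrieve-then-generate pattern behind such a system can be sketched minimally. This is not the CarbonPDF code: retrieval here is plain lexical overlap, the report chunks are invented, and `generate` is a placeholder for the fine-tuned Llama 3 call.

```python
# Invented report chunks standing in for parsed PDF text
CHUNKS = [
    "Product A total carbon footprint: 45.2 kg CO2e per unit.",
    "Product A packaging is fully recyclable cardboard.",
    "Manufacturing accounts for 60 percent of Product A emissions.",
]

def retrieve(question: str, chunks, k: int = 2):
    """Rank chunks by shared lowercase word count with the question (toy retriever)."""
    q = set(question.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question, context):
    """Ground the model's answer in the retrieved report text."""
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}\nAnswer:"

def generate(prompt: str) -> str:
    """Placeholder for a fine-tuned LLM call."""
    return "45.2 kg CO2e per unit"

question = "What is the total carbon footprint of Product A?"
context = retrieve(question, CHUNKS)
answer = generate(build_prompt(question, context))
```

A real pipeline would replace the toy retriever with dense embeddings and handle the inconsistent PDF-extracted text the paper targets.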
LLMs to Support a Domain Specific Knowledge Assistant
This work presents a custom approach to developing a domain-specific knowledge assistant for sustainability reporting using the International Financial Reporting Standards (IFRS). In this domain, there is no publicly available question-answer dataset, which has impeded the development of a high-quality chatbot to support companies with IFRS reporting. The two key contributions of this project therefore are: (1) A high-quality synthetic question-answer (QA) dataset based on IFRS sustainability standards, created using a novel generation and evaluation pipeline leveraging Large Language Models (LLMs). This comprises 1,063 diverse QA pairs that address a wide spectrum of potential user queries in sustainability reporting. Various LLM-based techniques are employed to create the dataset, including chain-of-thought reasoning and few-shot prompting. A custom evaluation framework is developed to assess question and answer quality across multiple dimensions, including faithfulness, relevance, and domain specificity. The dataset averages a score of 8.16 out of 10 across these metrics. (2) Two architectures for question-answering in the sustainability reporting domain - a RAG pipeline and a fully LLM-based pipeline. The architectures are developed through experimentation, fine-tuning, and training on the QA dataset. The final pipelines feature an LLM fine-tuned on domain-specific data and an industry classification component to improve the handling of complex queries. The RAG architecture achieves an accuracy of 85.32% on single-industry and 72.15% on cross-industry multiple-choice questions, outperforming the baseline approach by 4.67 and 19.21 percentage points, respectively. The LLM-based pipeline achieves an accuracy of 93.45% on single-industry and 80.30% on cross-industry multiple-choice questions, an improvement of 12.80 and 27.36 percentage points over the baseline, respectively.
- Asia (0.67)
- North America > United States (0.27)
- Questionnaire & Opinion Survey (0.87)
- Public Relations > Community Relations (0.75)
- Research Report > New Finding (0.45)
- Transportation (1.00)
- Law (1.00)
- Energy > Renewable (1.00)
- (7 more...)
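The generate-then-filter shape of such a synthetic-QA pipeline can be sketched as follows. This is not the project's actual framework: both `generate_qa` and `score_qa` are placeholders for LLM calls, the dimension names follow the abstract, and the keep-threshold is an assumption.

```python
def generate_qa(passage: str) -> dict:
    """Placeholder for a few-shot / chain-of-thought QA-generation call."""
    return {
        "question": f"What does this IFRS passage require? ({passage[:30]}...)",
        "answer": f"It requires: {passage}",
    }

def score_qa(qa: dict) -> dict:
    """Placeholder judge; a real pipeline would ask an LLM to rate each dimension 1-10."""
    return {"faithfulness": 9.0, "relevance": 8.5, "domain_specificity": 7.5}

def build_dataset(passages, min_avg: float = 7.0):
    """Keep only QA pairs whose average quality score clears the (assumed) threshold."""
    kept = []
    for p in passages:
        qa = generate_qa(p)
        scores = score_qa(qa)
        if sum(scores.values()) / len(scores) >= min_avg:
            kept.append({**qa, "scores": scores})
    return kept

dataset = build_dataset(["Disclose Scope 1 and Scope 2 greenhouse gas emissions."])
```

The filtering step is what lets a fully synthetic dataset reach a high average quality score before any fine-tuning.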
Glitter or Gold? Deriving Structured Insights from Sustainability Reports via Large Language Models
Bronzini, Marco, Nicolini, Carlo, Lepri, Bruno, Passerini, Andrea, Staiano, Jacopo
Over the last decade, several regulatory bodies have started requiring the disclosure of non-financial information from publicly listed companies, in light of investors' increasing attention to Environmental, Social, and Governance (ESG) issues. Publicly released information on sustainability practices is often disclosed in diverse, unstructured, and multi-modal documentation. This poses a challenge in efficiently gathering and aligning the data into a unified framework to derive insights related to Corporate Social Responsibility (CSR). Thus, using Information Extraction (IE) methods becomes an intuitive choice for delivering insightful and actionable data to stakeholders. In this study, we employ Large Language Models (LLMs), In-Context Learning, and the Retrieval-Augmented Generation (RAG) paradigm to extract structured insights related to ESG aspects from companies' sustainability reports. We then leverage graph-based representations to conduct statistical analyses concerning the extracted insights. These analyses revealed that ESG criteria cover a wide range of topics, exceeding 500, often beyond those considered in existing categorizations, and are addressed by companies through a variety of initiatives. Moreover, disclosure similarities emerged among companies from the same region or sector, validating ongoing hypotheses in the ESG literature. Lastly, by incorporating additional company attributes into our analyses, we investigated which factors have the greatest impact on companies' ESG ratings, showing that ESG disclosure affects the obtained ratings more than other financial or company data.
- Asia > China (0.28)
- Europe > Italy (0.28)
- Asia > Middle East > Saudi Arabia (0.28)
- (4 more...)
- Research Report (1.00)
- Public Relations > Community Relations (1.00)
- Overview (1.00)
- Social Sector (1.00)
- Information Technology > Security & Privacy (1.00)
- Energy > Power Industry (1.00)
- (5 more...)
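The extract-then-graph idea can be sketched compactly. This is not the authors' pipeline: `extract_triples` stands in for the in-context-learning extraction call, and the companies, actions, and topics are invented.

```python
from collections import defaultdict

def extract_triples(company: str, report_text: str):
    """Placeholder for an LLM extraction call returning (company, action, ESG topic) triples."""
    return [(company, "installed", "renewable energy"),
            (company, "published", "diversity targets")]

def build_graph(reports):
    """Load extracted triples into a company -> ESG-topic adjacency structure."""
    graph = defaultdict(set)
    for company, text in reports.items():
        for subj, _action, topic in extract_triples(company, text):
            graph[subj].add(topic)
    return graph

graph = build_graph({"ACME": "report text", "Globex": "report text"})
topics_per_company = {c: len(t) for c, t in graph.items()}
```

Once reports are in this graph form, the statistical analyses the abstract describes (topic coverage, regional and sectoral similarity) become queries over shared topic sets.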
Bayesian Optimization of ESG Financial Investments
Garrido-Merchán, Eduardo C., Piris, Gabriel González, Vaca, Maria Coronado
Financial experts and analysts seek to predict the variability of financial markets; correctly predicting this variability helps investors make successful investments. In recent years, however, a major trend has emerged in finance: ESG criteria. Concretely, ESG (Environmental, Social and Governance) criteria have become more significant in finance due to the growing importance of socially responsible investing, and because of the financial impact companies suffer when they fail to comply with them. Consequently, building a stock portfolio should take into account not only its performance but also its compliance with ESG criteria. Hence, this paper combines mathematical modelling with ESG and finance. In more detail, we use Bayesian optimization (BO), a state-of-the-art sequential design strategy for optimizing black-boxes with unknown and costly-to-compute analytical expressions, to maximize the performance of a stock portfolio under ESG criteria incorporated into the objective function as soft constraints. In an illustrative experiment, we use the Sharpe ratio, which takes into consideration the portfolio's returns and variance; in other words, it balances the trade-off between maximizing returns and minimizing risk. In the present work, ESG criteria have been divided into fourteen independent categories combined linearly to estimate a firm's total ESG score. Most importantly, our approach would scale to alternative black-box methods of estimating the performance and ESG compliance of the stock portfolio. In particular, this research opens the door to many new research lines, as it shows that a portfolio can be optimized with a BO procedure that takes into consideration both financial performance and the fulfilment of ESG criteria.
- Research Report (1.00)
- Public Relations > Community Relations (0.36)
- Overview > Growing Problem (0.34)
- Energy > Oil & Gas (1.00)
- Banking & Finance > Trading (1.00)
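The core construction the abstract describes, folding an ESG soft constraint into the Sharpe-ratio objective, can be sketched as below. All numbers are invented, volatilities are treated as independent for brevity, and the search loop is plain random search standing in for the paper's Bayesian optimization (which would use a Gaussian-process surrogate and an acquisition function).

```python
import random

RETURNS = [0.08, 0.05, 0.12]   # invented expected returns per asset
RISKS = [0.20, 0.10, 0.30]     # invented volatilities (correlations ignored)
ESG = [0.9, 0.4, 0.2]          # invented aggregate ESG score per asset
RISK_FREE = 0.02
LAMBDA = 0.5                   # weight of the ESG soft constraint (assumed)

def objective(w):
    """Sharpe ratio plus a linear ESG term: the soft constraint lives in the objective."""
    ret = sum(wi * ri for wi, ri in zip(w, RETURNS))
    vol = sum(wi * si for wi, si in zip(w, RISKS)) or 1e-9
    sharpe = (ret - RISK_FREE) / vol
    esg = sum(wi * ei for wi, ei in zip(w, ESG))
    return sharpe + LAMBDA * esg

def random_weights(n, rng):
    """Draw a random point on the simplex (weights sum to 1)."""
    raw = [rng.random() for _ in range(n)]
    total = sum(raw)
    return [x / total for x in raw]

rng = random.Random(0)
best = max((random_weights(3, rng) for _ in range(500)), key=objective)
```

Swapping the random-search loop for a BO library call would leave `objective` unchanged, which is exactly why the authors note the approach scales to alternative black-box estimators.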
Fintech Industry Must Transform to Help Underserved Communities
Alternative credit options can mean the difference between financial well-being and financial hardship for many borrowers. Fintech advancements such as buy-now-pay-later, combined with credit models driven by artificial intelligence and machine learning, may pave the way for a fairer and more inclusive future of credit. But lessons from the financial crisis ring clear: when only one part of the market is required to comply with regulations, the other will compete by offering disadvantageous and risky products. Regulators now face the question of how to advance a regulatory framework that encourages innovation while protecting consumers. Buy-now-pay-later options, along with advances in artificial intelligence and machine learning during the pandemic, spurred marked industry growth, with implications for improved assistance to underserved communities.
Facial recognition technology: The need for public regulation and corporate responsibility - Microsoft on the Issues
All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head. The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people's faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike. Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products.
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology (1.00)
- Government > Immigration & Customs (0.95)
- (2 more...)
Are tech companies responsible for negative outcomes?
America's largest tech companies face a growing backlash over the potentially negative impacts of their strategic decisions and innovations. For example, companies like Apple, Facebook, Google and Microsoft are investing in artificial intelligence (AI) technologies and product roadmaps that will replace millions of jobs during the coming years. Experts in marketing, technology and social awareness say it's time for technology providers to assume greater responsibility for the personal pain that comes along with the collective gain. Emerging technology is at almost perpetual odds with the status quo, but U.S. society is coming to realize that dynamic can lead to job losses, unfair treatment of social services and a stain on civic engagement. The power and influence that some tech companies command is being reevaluated in light of the myriad ways people are being disenfranchised in some way by their actions.
- Information Technology (1.00)
- Government (0.92)